
    Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures

    For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions—what behavioral economists call our choice architectures—are increasingly technologically laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets of options we choose from and the way those options are framed. Moreover, AI/ML makes it possible for those options and their framings—the choice architectures—to be tailored to the individual chooser. They are constructed based on information collected about our individual preferences, interests, aspirations, and vulnerabilities, with the goal of influencing our decisions. At the same time, because we are habituated to these technologies, we pay them little notice. They are, as philosophers of technology put it, transparent to us—effectively invisible. I argue that this invisible layer of technological mediation, which structures and influences our decision-making, renders us deeply susceptible to manipulation. Absent a guarantee that these technologies are not being used to manipulate and exploit, individuals will have little reason to trust them.

    Automated Influence and the Challenge of Cognitive Security

    Advances in AI are powering increasingly precise and widespread computational propaganda, posing serious threats to national security. The military and intelligence communities are starting to discuss ways to engage in this space, but the path forward is still unclear. These developments raise pressing ethical questions, about which existing ethics frameworks are silent. Understanding these challenges through the lens of “cognitive security,” we argue, offers a promising approach.

    Technology, autonomy, and manipulation

    Since 2016, when the Facebook/Cambridge Analytica scandal began to emerge, public concern has grown around the threat of “online manipulation”. While these worries are familiar to privacy researchers, this paper aims to make them more salient to policymakers — first, by defining “online manipulation”, thus enabling identification of manipulative practices; and second, by drawing attention to the specific harms online manipulation threatens. We argue that online manipulation is the use of information technology to covertly influence another person’s decision-making, by targeting and exploiting their decision-making vulnerabilities. Engaging in such practices can harm individuals by diminishing their economic interests, but the deeper, more insidious harm is the challenge such practices pose to individual autonomy. We explore this autonomy harm, emphasising its implications for both individuals and society, and we briefly outline some strategies for combating online manipulation and strengthening autonomy in an increasingly digital world.

    Online Manipulation: Hidden Influences in a Digital World

    Privacy and surveillance scholars increasingly worry that data collectors can use the information they gather about our behaviors, preferences, interests, incomes, and so on to manipulate us. Yet what it means, exactly, to manipulate someone, and how we might systematically distinguish cases of manipulation from other forms of influence—such as persuasion and coercion—has not been explored thoroughly enough in light of the unprecedented capacities that information technologies and digital media enable. In this paper, we develop a definition of manipulation that addresses these enhanced capacities, investigate how information technologies facilitate manipulative practices, and describe the harms—to individuals and to social institutions—that flow from such practices. We use the term “online manipulation” to highlight the particular class of manipulative practices enabled by a broad range of information technologies. We argue that at its core, manipulation is hidden influence—the covert subversion of another person’s decision-making power. We argue that information technology, for a number of reasons, makes engaging in manipulative practices significantly easier, and it makes the effects of such practices potentially more deeply debilitating. And we argue that by subverting another person’s decision-making power, manipulation undermines his or her autonomy. Given that respect for individual autonomy is a bedrock principle of liberal democracy, the threat of online manipulation is a cause for grave concern.

    Notice After Notice-and-Consent: Why Privacy Disclosures Are Valuable Even If Consent Frameworks Aren’t

    The dominant legal and regulatory approach to protecting information privacy is a form of mandated disclosure commonly known as “notice-and-consent.” Many have criticized this approach, arguing that privacy decisions are too complicated, and privacy disclosures too convoluted, for individuals to make meaningful consent decisions about privacy choices—decisions that often require us to waive important rights. While I agree with these criticisms, I argue that they only meaningfully call into question the “consent” part of notice-and-consent, and that they say little about the value of notice. We ought to decouple notice from consent, and imagine notice serving other normative ends besides readying people to make informed consent decisions.

    Information Privacy and Social Self-Authorship

    The dominant approach in privacy theory defines information privacy as some form of control over personal information. In this essay, I argue that the control approach is mistaken, but for different reasons than those offered by its other critics. I claim that information privacy involves the drawing of epistemic boundaries—boundaries between what others should and shouldn’t know about us. While controlling what information others have about us is one strategy we use to draw such boundaries, it is not the only one. We conceal information about ourselves and we reveal it. And since the meaning of information is not self-evident, we also work to shape how others contextualize and interpret the information about us that they have. Information privacy is thus about more than controlling information; it involves the constant work of producing and managing public identities, what I call “social self-authorship.” In the second part of the essay, I argue that thinking about information privacy in terms of social self-authorship helps us see ways that information technology threatens privacy, which the control approach misses. Namely, information technology makes social self-authorship invisible and unnecessary, by making it difficult for us to know when others are forming impressions about us, and by providing them with tools for making assumptions about who we are which obviate the need for our involvement in the process.

    Ethical Considerations for Digitally Targeted Public Health Interventions

    Public health scholars and public health officials increasingly worry about health-related misinformation online, and they are searching for ways to mitigate it. Some have suggested that the tools of digital influence are themselves a possible answer: we can use targeted, automated digital messaging to counter health-related misinformation and promote accurate information. In this commentary, I raise a number of ethical questions prompted by such proposals—questions familiar from the ethics of influence and the ethics of AI—highlighting hidden costs of targeted digital messaging that ought to be weighed against the health benefits it promises.

    Transparent Media and the Development of Digital Habits

    Our lives are guided by habits. Most of the activities we engage in throughout the day are initiated and carried out not by rational thought and deliberation, but through an ingrained set of dispositions or patterns of action—what Aristotle calls a hexis. We develop these dispositions over time, by acting and gauging how the world responds. I tilt the steering wheel too far and the car’s lurch teaches me how much force is needed to steady it. I come too close to a hot stove and the burn I get inclines me not to get too close again. This feedback and the habits it produces are bodily. They are possible because the medium through which these actions take place is a physical, sensible one. The world around us is, in the language of postphenomenology, an opaque one. We notice its texture and contours as we move through it, and crucially, we bump up against it from time to time. The digital world, by contrast, is largely transparent. Digital media are designed to recede from view. As a result, we experience little friction as we carry out activities online; the consequences of our actions are often not apparent to us. This distinction between the opacity of the natural world and the transparency of the digital one raises important questions. In this chapter, I ask: how does the transparency of digital media affect our ability to develop healthy habits online? If the digital world is constructed precisely not to push back against us, how are we supposed to gauge whether our actions are good or bad, for us and for others? The answer to this question has important ramifications for a number of ethical, political, and policy debates around issues in online life. For in order to advance cherished norms like privacy, civility, and fairness online, we need more than good laws and good policies—we need good habits, which dispose us to act in ways conducive to our and others’ flourishing.

    Strange Loops: Apparent versus Actual Human Involvement in Automated Decision-Making

    The era of AI-based decision-making fast approaches, and anxiety is mounting about when, and why, we should keep “humans in the loop” (“HITL”). Thus far, commentary has focused primarily on two questions: whether, and when, keeping humans involved will improve the results of decision-making (making them safer or more accurate), and whether, and when, non-accuracy-related values—legitimacy, dignity, and so forth—are vindicated by the inclusion of humans in decision-making. Here, we take up a related but distinct question, which has eluded the scholarship thus far: does it matter if humans appear to be in the loop of decision-making, independent from whether they actually are? In other words, what is at stake in the disjunction between whether humans in fact have ultimate authority over decision-making versus whether humans merely seem, from the outside, to have such authority? Our argument proceeds in four parts. First, we build our formal model, enriching the HITL question to include not only whether humans are actually in the loop of decision-making, but also whether they appear to be so. Second, we describe situations in which the actuality and appearance of HITL align: those that seem to involve human judgment and actually do, and those that seem automated and actually are. Third, we explore instances of misalignment: situations in which systems that seem to involve human judgment actually do not, and situations in which systems that hold themselves out as automated actually rely on humans operating “behind the curtain.” Fourth, we examine the normative issues that result from HITL misalignment, arguing that it challenges individual decision-making about automated systems and complicates collective governance of automation.

    Ihde’s Missing Sciences: Postphenomenology, Big Data, and the Human Sciences

    In Husserl’s Missing Technologies, Don Ihde urges us to think deeply and critically about the ways in which the technologies utilized in contemporary science structure the way we perceive and understand the natural world. In this paper, I argue that we ought to extend Ihde’s analysis to consider how such technologies are changing the way we perceive and understand ourselves too. For it is not only the natural or “hard” sciences which are turning to advanced technologies for help in carrying out their work, but also the social and “human” sciences. One set of tools in particular is rapidly being adopted—the family of information technologies that fall under the umbrella of “big data.” As in the natural sciences, big data is giving researchers in the human sciences access to phenomena which they would otherwise be unable to experience and investigate. And, as in the natural sciences, these tools thereby shape the ways those scientists perceive and understand who and what we are. Looking at two case studies of big data-driven research in the human sciences, I begin in this paper to suggest how we might understand these phenomenological and hermeneutic changes.